Patent Abstract:
Centered Bearing Fluid Turbine Flow Meter The invention relates to a fluid turbine flow meter comprising a measuring chamber (13), a turbine body (12) which is axially displaced as a function of fluid flow rates between a high position and a low position in the measuring chamber (13) and having a pivot axis (14), a centering bearing (15) for the pivot axis (14) in the measuring chamber (13), having a longitudinal body with a longitudinal passage (15a) supporting and pierced by the axis of rotation (14), the axis of rotation (14) being pivotal because it is axially held in the measuring chamber (13) by a first axial end stop (22) in the raised position and by a second axial end stop (26) in the down position, the centering bearing (15) having, in the longitudinal passage (15a), at least two centering support walls with cylindrical longitudinal sections for the axis of rotation (14).
Publication number: BR102013025963A2
Application number: R102013025963-2
Filing date: 2013-10-08
Publication date: 2018-06-26
Inventor: Won Sung-Joon
Applicant: Samsung Electronics Co., Ltd.
IPC primary class:
Patent specification:

(54) Title: METHOD AND APPARATUS FOR PERFORMING A PRE-REGULATION OPERATION MODE USING VOICE RECOGNITION
(51) Int. Cl.: G10L 15/26; G06F 3/0484
(30) Unionist Priority: 08/10/2012 KR 102012-0111402
(73) Holder(s): SAMSUNG ELECTRONICS CO., LTD.
(72) Inventor(s): SUNG-JOON WON
(74) Attorney(s): ORLANDO DE SOUZA
(57) Summary: FLUID TURBINE FLOW METER WITH CENTERING BEARING The invention relates to a fluid turbine flow meter, comprising a measurement chamber (13), a turbine body (12) that is axially displaced, as a function of fluid flow rates, between a high position and a low position in the measurement chamber (13) and having an axis of rotation (14), a centering bearing (15) for the axis of rotation (14) in the measuring chamber (13) which has a longitudinal body with a longitudinal passage (15a) supporting and passed through by the axis of rotation (14), the axis of rotation (14) being pivoting as it is axially maintained in the measuring chamber (13) by a first axial end stop (22) in the elevated position and by a second axial end stop (26) in the low position, the centering bearing (15) having, in the longitudinal passage (15a), at least two centering support walls with cylindrical longitudinal sections for the axis of rotation (14).
METHOD AND APPARATUS FOR EXECUTING A PRE-REGULATION OPERATION MODE USING VOICE RECOGNITION
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention generally relates to a speech recognition technology used for a terminal and, more particularly, to a method and apparatus for performing a pre-set mode of operation using speech recognition, which recognizes an input voice command to execute a pre-set operating mode of a terminal.
2. Description of the Related Art
Recently, the functions of terminals have become diversified, and thus, terminals are implemented as multimedia devices having complex functions, such as photographing images or videos, playing music or video files, playing games, receiving broadcasts and executing applications. In addition, terminals such as smartphones and tablet PCs are provided with touch screens to perform various functions of the terminals by entering text via touch input, scrolling and dragging.
In the execution of various functions of the terminal, users prefer simple control, and thus, various hardware and software attempts have recently been made to implement the terminal with user preferences in mind.
One of these several attempts helps in the execution of the terminal's functions by applying voice recognition to the terminal, and voice recognition terminals are actively being researched and developed.
Korean Patent Publication No. 2009-0020265 discloses a message modification function to which a voice signal is applied, such as introduction, movement, deletion, modification and searching of a message through speech recognition. In addition, Korean Patent Publication No. 2010-0064875 discloses a function of converting a user's voice into text through speech recognition and then displaying the text, and a function of executing a text modification operation by selecting a user's voice, a touch or a key entry from command lists displayed by touching a part to be modified.
In the prior art, operations are performed through speech recognition. Advanced speech recognition technologies are still being researched, but there is still no perfect speech recognition technology which can precisely recognize a voice. Therefore, when a pre-regulated main operation is performed through the speech recognition application, the operation is occasionally not performed due to an incorrectly recognized voice, and users are left with the inconvenience of these errors. That is, when an error is generated in the speech recognition corresponding to the main operation, many more control steps may be required, and therefore, it takes a long time to correctly execute the operation. Therefore, when the main operation is to be performed by applying voice recognition technology, it can be difficult to conveniently and widely apply speech recognition.
SUMMARY OF THE INVENTION
The present invention was made to address at least the problems and/or disadvantages described above, and to provide at least the advantages described below.
Therefore, an aspect of the present invention is to provide a method and an apparatus for performing a pre-regulated mode of operation by using speech recognition, which can reduce the inconvenience caused by a speech recognition error that may be generated when a main operation is to be performed through the application of speech recognition.
In accordance with an aspect of the present invention, a method of performing a pre-regulated operation by using speech recognition is provided. The method includes performing the pre-set operation in a pre-set operation mode according to a key input or a touch input in the pre-set operation mode; and recognizing a voice input during the execution of the pre-regulated operation of the pre-regulated operation mode, and assisting the execution of the pre-regulated operation according to the recognized voice.
In accordance with another aspect of the present invention, an apparatus for performing a pre-regulated operation using voice recognition is provided, the apparatus including an input/output module including at least one button and a physical or virtual keyboard configured to receive a control input from the user; a touch screen configured to receive user control input and display an execution image, an operation state and a menu state of an application program; and a controller configured to control the input/output module and the touch screen, the controller including a voice recognition module for recognizing voice input by the user through the microphone of the input/output module, the controller further configured to execute the pre-set operation according to a key input or a touch input from the touch screen, and to apply a recognized user voice received from the voice recognition module for assistance in executing the pre-regulated operation.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and other aspects, features and advantages of the present invention will be more evident from the following detailed description taken in conjunction with the associated drawings, in which:
figure 1 is a block diagram of a mobile device according to the present invention;
figure 2 is a flowchart illustrating an application process of executing a mode of operation pre-regulated by the use of speech recognition according to an embodiment of the present invention;
figure 3 is a flow chart illustrating a process of modifying a text and assisting a pre-regulated operation by using speech recognition according to a first embodiment of the present invention;
figures 4A to 4C illustrate an example of assisting a text modification by using speech recognition according to the first embodiment of the present invention;
figures 5A to 5C illustrate an example of assisting a pre-regulated operation by using speech recognition in accordance with a second embodiment of the present invention;
figures 6A to 6C illustrate an example of assisting a pre-regulated operation by using speech recognition according to a third embodiment of the present invention;
figure 7 is a flow chart illustrating a process for assisting a pre-set operation and a text modification by using speech recognition according to the second embodiment of the present invention;
figure 8 is a flow chart illustrating a process for assisting a text modification by using speech recognition in accordance with the third embodiment of the present invention;
figure 9 is a flow chart illustrating a process for assisting an entry operation in a list menu by using speech recognition according to a fourth embodiment of the present invention;
figures 10A to 10C illustrate an example of assisting in the operation of entering the list menu by using speech recognition according to the fourth embodiment of the present invention;
figures 11A to 11C illustrate an example of assistance in an operation of entering a list menu by using speech recognition according to a fifth embodiment of the present invention;
figures 12A to 12C illustrate an example of assistance in an operation of entering a list menu by using speech recognition according to a sixth embodiment of the present invention;
figure 13 is a flow chart illustrating a process for assisting an initial screen editing by using speech recognition in accordance with the fifth embodiment of the present invention; and
figures 14A to 14E illustrate an example of assisting an initial screen editing by using speech recognition in accordance with the seventh embodiment of the present invention.
DETAILED DESCRIPTION OF MODALITIES OF THE PRESENT INVENTION
From this point on, various embodiments of the present invention will be described with reference to the associated drawings. In the description that follows, specific embodiments are provided and described, but they are provided merely to assist with a general understanding of the present invention. Therefore, it will be apparent to those skilled in the art that the specific modalities can be changed or modified, without departing from the scope of the present invention.
Figure 1 is a block diagram of a mobile device according to an embodiment of the present invention.
Referring to Figure 1, a device 100 includes a display unit 190 and a display controller 195. In addition, device 100 may include a controller 110, a mobile communication module 120, a subcommunication module 130, a multimedia module 140, a camera module 150, a GPS module 155, an input/output module 160, a sensor module 170, a storage unit 175 and a power supplier 180. Subcommunication module 130 includes at least one of a wireless LAN module 131 and a near field communication (NFC) module 132, and multimedia module 140 includes at least one of an audio reproduction module 142 and a video reproduction module 143. The camera module 150 includes at least one of a first camera 151 and a second camera 152. Hereinafter, a case in which the display unit 190 and the display controller 195 are a touch screen and a touch screen controller, respectively, will be described as an example.
Controller 110 controls mobile communication module 120, subcommunication module 130, multimedia module 140, camera module 150, GPS module 155, input/output module 160, sensor module 170, storage unit 175, power provider 180, touch screen 190 and touch screen controller 195. In addition, controller 110 includes a speech recognition module 111, which recognizes a voice input from a microphone 162 of input/output module 160. In addition, controller 110 receives user control from input/output module 160 or touch screen 190 to perform a pre-set operation, and assists with the execution of the pre-regulated operation by receiving a voice of the user from the voice recognition module 111 and applying the recognized voice. User control from input/output module 160 can be received by controller 110 via keyboard 166.
The mobile communication module 120 connects the device 100 with an external device by using one or a plurality of antennas (not shown) according to control of the controller 110. The mobile communication module 120 transmits/receives a radio signal for a voice call, a video call, a short message service (SMS), or a multimedia message service (MMS) with a mobile phone (not shown), a smartphone (not shown), a tablet PC, or another device (not shown) which has a phone number entered in the device 100.
The wireless LAN module 131 of the subcommunication module 130 can be connected to the Internet according to control of the controller 110 in a place where a wireless access point (AP) (not shown) is installed. The wireless LAN module 131 supports a wireless LAN standard (IEEE 802.11x) of the Institute of Electrical and Electronics Engineers. The NFC module 132 can wirelessly perform near-field communication between the terminal 100 and an imaging device (not shown), according to control of the controller 110.
Device 100 includes at least one of the mobile communication module 120, the wireless LAN module 131 and the NFC module 132. For example, device 100 may include a combination of the mobile communication module 120, the wireless LAN module 131 and the NFC module 132 according to its capabilities.
The multimedia module 140 includes the audio reproduction module 142 and the video reproduction module 143, and may or may not include a broadcast communication module 141. In addition, the audio reproduction module 142 or the video reproduction module 143 of the multimedia module 140 can be included in the controller 110.
The input/output module 160 includes a plurality of buttons 161, a microphone 162 and a keyboard 166. The button 161 can be formed on a housing of the device 100. The microphone 162 receives a voice or a sound and generates an electrical signal according to control of the controller 110.
Speaker 163 can output sounds corresponding to various signals of the mobile communication module 120, the subcommunication module 130, the multimedia module 140, or the camera module 150 to the outside of the device 100. The speaker 163 can output a sound corresponding to a function performed by the device 100. One speaker 163 or a plurality of speakers 163 can be formed at a suitable position or positions of the housing of the device 100.
Vibration motor 164 can convert an electrical signal into mechanical vibration according to control of the controller 110. For example, when device 100 in a vibration mode receives a voice communication from another device (not shown), the vibration motor 164 is operated. One vibration motor 164 or a plurality of vibration motors 164 can be formed in the housing of device 100. Vibration motor 164 can operate in response to a touch action of a user touching the display unit 190 and successive touch actions on the display unit 190.
Connector 165 can be used as an interface for connecting the device 100 with an external device (not shown), or the device 100 with a power source (not shown). Data stored in storage unit 175 of device 100 can be transmitted to the external device (not shown), or data can be received from the external device (not shown), via a wired cable connected to connector 165 according to control of the controller 110. Power can be input from the power source (not shown) via the wired cable connected to connector 165, or a battery (not shown) can be charged.
Keyboard 166 receives key input from the user for control of device 100. Keyboard 166 includes a physical keyboard (not shown) formed on device 100 or a virtual keyboard (not shown) displayed on display unit 190. The physical keyboard (not shown) formed on device 100 can be excluded, according to the capacity or structure of device 100.
A headset (not shown) is inserted into the headset connection jack 167, so that the headset connection jack 167 can be connected to device 100.
The storage unit 175 can store signals or data input/output according to the operations of the mobile communication module 120, the subcommunication module 130, the multimedia module 140, the camera module 150, the GPS module 155, the input/output module 160, the sensor module 170 and the display unit 190. The storage unit 175 can store a control program and applications for controlling the device 100 or the controller 110.
The term storage unit includes the storage unit 175, a ROM and a RAM (not shown) in the controller 110, or a memory card (not shown) (for example, an SD card or a pen drive) installed in the device 100. The storage unit can include a non-volatile memory, a volatile memory, a hard disk drive (HDD), or a solid state drive (SSD).
The touch screen 190 receives a user control and displays an execution image, an operating state and a menu state of an application program.
The touch screen 190 provides the user with a user interface corresponding to various services (for example, telephone communication, data transmission, broadcasting, taking a photo, etc.). The touch screen 190 transmits an analog signal corresponding to at least one touch input into the user interface to the touch screen controller 195. The touch screen 190 receives at least one touch through a part of the user's body (for example, fingers, including a thumb) or a touchable input device. In addition, the touch screen 190 receives successive motions of one touch among the at least one touch. The touch screen 190 transmits an analog signal corresponding to the successive input touch actions to the touch screen controller 195.
The touch screen 190 can be implemented, for example, as a resistive type, a capacitive type, an infrared type, or an acoustic wave type.
The touch screen controller 195 converts the analog signal received from the touch screen 190 into a digital signal (for example, the X and Y coordinates) and then transmits the digital signal to controller 110. Controller 110 controls the touch screen 190 by using the digital signal received from the touch screen controller 195. For example, controller 110 may allow a shortcut icon (not shown) displayed on the touch screen 190 to be selected or executed in response to the touch. In addition, touch screen controller 195 can be included in controller 110.
The touch screen 190 may include at least two touch screen panels which detect a touch or an approach of a user's body part or a touchable input device, so as to simultaneously receive input by the user's body part and input by the touchable input device. The at least two touch screen panels provide different output values to the touch screen controller 195, and the touch screen controller 195 recognizes the values input from the at least two touch screen panels differently, to determine whether the input from the touch screen is input by the user's body part or input by the touchable input device.
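The dual-panel discrimination described above can be pictured with a minimal sketch. The panel types, signal ranges and threshold below are illustrative assumptions, not details taken from the patent:

```python
def classify_touch(capacitive_value, stylus_value, stylus_threshold=10):
    """Classify a touch event from two panel output values.

    A capacitive panel responds strongly to a finger, while a
    stylus-sensing panel responds only to the touchable input
    device; if the stylus panel reports a signal above its
    threshold, the event is attributed to the input device.
    Returns 'stylus', 'body', or 'none'.
    """
    if stylus_value >= stylus_threshold:
        return "stylus"
    if capacitive_value > 0:
        return "body"
    return "none"
```

For example, a strong capacitive reading with no stylus signal would be classified as a body-part touch.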
A method of performing a pre-regulated operation in a pre-regulated operation mode by using speech recognition in accordance with the present invention can be divided into two processes, as described below. The first process is a main operation execution process, in which a pre-set operation of an operating mode is performed according to a key input or a touch input in a pre-set operating mode. The second process is a main operation assistance process, in which a voice input during a pre-set operation of the operating mode in the first process is recognized, and then the operation of the first process is assisted according to the recognized voice.
Figure 2 is a flowchart illustrating a process of performing a pre-set operation in a pre-set operation mode by using speech recognition according to an embodiment of the present invention.
With reference to figure 2, a pre-set operation mode is selected from a plurality of pre-set operation modes of the device 100 through a user control input using one of the button 161, the keyboard 166 and the microphone 162 of the input/output module 160, or the touch screen 190, and then the selected pre-set operating mode is executed in step 200. After the pre-set operating mode of the device 100 is executed, in step 202, a key input or a touch input by the user for performing the pre-set operation of the pre-set operating mode is received. In step 204, the pre-set operation according to the input of step 202 is performed.
In step 206, a voice is received from the user through the microphone 162 while the pre-set operation is performed. After that, in step 208, the received voice is recognized through the speech recognition function of the speech recognition module 111, and assistance in executing the pre-regulated operation is performed by applying the recognized voice.
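The two processes of figure 2 can be sketched as a pair of callbacks: one executes the pre-set operation on a key or touch input, and the other applies a concurrently recognized voice to assist that operation. All class and function names here are invented for illustration:

```python
class PresetModeRunner:
    """Minimal sketch of the figure-2 flow, under assumed callbacks."""

    def __init__(self, recognizer):
        self.recognizer = recognizer  # converts captured audio to text
        self.log = []

    def on_touch(self, operation):
        # First process (steps 202-204): execute the pre-set
        # operation according to the key or touch input.
        self.log.append(("execute", operation))

    def on_voice(self, audio):
        # Second process (steps 206-208): recognize the voice input
        # received during execution and use it for assistance.
        text = self.recognizer(audio)
        self.log.append(("assist", text))
```

A trivial recognizer stand-in (e.g. `lambda a: a.upper()`) is enough to exercise the ordering of the two processes.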
At this time, the pre-set operating mode includes several operating modes performed by a user control input, such as, for example, a text writing mode, a list menu execution mode and a home screen execution mode. Also, the pre-set operation includes a text input, a touch input, and scrolling and dragging applied through the touch input.
The term touch means an operation in which the user contacts a particular area of the touch screen with a body part or a touchable input device and then removes the body part or touchable input device from that area, or a flick action, in which the user contacts a particular area of the touch screen with a body part or a touchable input device and then removes it while moving it in one direction.
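The distinction drawn above between a plain touch and a flick can be sketched by comparing the contact and release coordinates; the distance threshold is an assumed tuning value, not one specified by the patent:

```python
import math

def classify_gesture(down, up, flick_threshold=30.0):
    """Classify a gesture from its contact and release points.

    down and up are (x, y) pixel coordinates. A release close to
    the contact point is a plain touch; a release after movement
    in some direction is a flick.
    """
    dist = math.hypot(up[0] - down[0], up[1] - down[1])
    return "flick" if dist >= flick_threshold else "touch"
```

A real implementation would likely also consider release velocity, but distance alone illustrates the definition.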
The method of carrying out the pre-regulated operation by using speech recognition in accordance with the present invention will now be described in greater detail through the embodiments below.
The method of carrying out the pre-set operation by using speech recognition according to the present invention in a text writing mode, which is one of the pre-regulated modes of operation, will be described. Figure 3 is a flow chart illustrating a process of modifying a text and assisting a pre-regulated operation by using speech recognition according to the first embodiment of the present invention. An operation of simultaneously entering a text and a voice in the text writing mode and assisting a modification of the entered text by using the voice input will be described.
With reference to figure 3, steps 300 to 306 are steps of performing the pre-set operation according to a key input or a touch input in the pre-set operation mode, corresponding to the first process of figure 2, in which a main operation of executing the text writing mode and displaying the entered text in a pre-set text display window will be described.
First, the text writing mode is executed through a user control input in step 300, and a speech recognition function of the speech recognition module 111 is activated in step 302. At this point, the voice recognition function of the voice recognition module 111 can be automatically activated simultaneously with the execution of the text writing mode, or can be activated by a user selection. After that, it is determined whether text is entered through the physical or virtual keyboard 166 according to a key input or a touch input in step 304. When the text is entered, in step 306 the entered text is displayed in a pre-set text display window (i), as seen in figure 4A. If not, whether there is a voice input is checked in step 308.
The remaining steps correspond to the second process of figure 2: recognizing the voice input during the execution of the pre-set operation of the pre-set operation mode of the first process of figure 2, to assist in the execution of the operation of the first process according to the recognized voice.
In step 308, it is determined whether a voice is input through the microphone 162. The voice can be input in all cases: when the text is not yet entered, while the text is being entered, or after the text is entered and displayed in the text writing mode. Hereinafter, among these cases, the process will be described with reference to the case in which the text is being entered. When a voice is input from the user while text is being entered in the text writing mode, the activated speech recognition module 111 recognizes the input voice in step 310. If no voice is input, the user determines whether the text writing mode should end in step 319.
Steps 311 to 318 describe the text modification assistance operation. That is, in step 311, the displayed text is compared with the voice recognized from the user reading the displayed text aloud into the microphone 162. When the displayed text is not identical to the recognized voice, it is determined in step 312 that the displayed text has an error. When the displayed text has an error, the recognized voice is converted into text to assist in modifying the displayed text in step 314. When the displayed text is identical to the recognized voice, it is determined that the displayed text has no error in step 312.
As a result of determining whether the entered text has an error in step 312, when the displayed text has an error, the recognized voice is converted into text in step 314, and the converted voice text is displayed in a pre-set voice assistance window (j), as seen in figure 4B, in step 315. The voice assistance window (j) is set to be distinguished from the text display window (i), and is displayed adjacent to the text display window (i) on an upper, lower, left or right side.
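The comparison of steps 311 to 315 can be sketched as a small function that checks the displayed text against the recognized voice and, on a mismatch, yields the recognized text as a correction candidate for the voice assistance window. The normalization rule below (case-insensitive, whitespace-trimmed) is an illustrative assumption:

```python
def check_displayed_text(displayed, recognized):
    """Compare displayed text with the recognized voice (step 311).

    Returns (has_error, candidate): has_error is True when the
    displayed text does not match the recognized voice (step 312),
    and candidate is then the recognized text to be shown in the
    voice assistance window (steps 314-315).
    """
    if displayed.strip().lower() == recognized.strip().lower():
        return (False, None)
    return (True, recognized)
```

A production system would use a fuzzier match than exact string equality, since speech recognizers rarely reproduce punctuation and casing exactly.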
After that, the user identifies the contents of the text display window (i) and the contents of the voice assistance window (j) and determines, in step 317, whether to change the contents of the text display window (i) to the contents of the voice assistance window (j). When the user wants to change the contents of the text display window (i) to the contents of the voice assistance window (j), the user converts the text displayed in the text display window (i) into the converted voice text displayed in the voice assistance window (j) by applying a pre-set operation for changing the content of the text display window (i) to the content of the voice assistance window (j) through a user control in step 318. The pre-set operation for changing the content of the text display window (i) to the content of the voice assistance window (j) can be set as an item in the text writing mode, a pop-up window on the touch screen 190, or a pre-set operation command input using the microphone 162. If the function is set as an item in the text writing mode or as a pop-up window, the user will enter the command using the button 161 or the keyboard 166.
The user then selects whether to end the text writing mode in step 319. When the user does not want to end the text writing mode, the text writing mode does not end and the process returns to step 304. When the user selects to end the text writing mode, the text writing mode ends.
In step 312, if the entered text is identical to the recognized voice, that is, there is no error, steps 320 to 324 will be performed, which describe the execution of a pre-set operation command. That is, as a result of determining whether the entered text has an error in step 312, when the displayed text has no error and the recognized voice is a pre-set operation command, steps 320 to 324 describe the execution of the pre-regulated operation by applying the recognized voice.
When the displayed text has no error in step 312, it is determined whether the recognized voice is the pre-set operation command in step 320. When the recognized voice is the pre-set operation command, the pre-set operation command is executed by applying the recognized voice in step 322. When the pre-set operation command is completely executed, an operation result is output in step 324. At this time, the execution of the operation command does not interrupt an additional text input or the display of the entered text. That is, the text input can be performed simultaneously with the voice input and the recognition of the input voice. Also, when text is entered while the pre-set operation of the recognized input voice is performed, the text can be displayed. In addition, in step 320, a recognized voice with no meaning, which has no similarity to either the displayed text or a pre-set operation command, is not applied when speech recognition is applied.
After step 324, or if the recognized voice is not similar to the displayed text or to a pre-set operation command in step 320, the user selects whether to end the text writing mode in step 319. When the user does not want to end the text writing mode, the text writing mode does not end and the process returns to step 304. When the user selects to end the text writing mode, the text writing mode ends.
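The overall decision of figure 3 — confirm the text, run a command, correct the text, or discard a meaningless voice — can be sketched as a dispatcher. The command table and the word-overlap similarity measure are invented stand-ins for the patent's unspecified matching logic:

```python
# Hypothetical command table; the patent only names this one command.
COMMANDS = {"attach my current location": "run_location_app"}

def _similar(a, b):
    # Crude stand-in for real speech/text similarity: word overlap.
    wa, wb = set(a.split()), set(b.split())
    return bool(wa & wb) and len(wa & wb) / max(len(wa), len(wb)) > 0.5

def dispatch_voice(displayed, recognized):
    """Return ('confirm', None), ('command', action),
    ('correct', text) or ('ignore', None)."""
    norm = recognized.strip().lower()
    disp = displayed.strip().lower()
    if norm == disp:
        return ("confirm", None)            # step 312: no error found
    if norm in COMMANDS:
        return ("command", COMMANDS[norm])  # steps 320-324
    if _similar(norm, disp):
        return ("correct", recognized)      # steps 314-315
    return ("ignore", None)                 # meaningless voice is discarded
```

For example, a recognized voice that mostly matches the displayed text but differs in a few words is treated as a correction, while an exact command phrase triggers the associated operation.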
Figures 4A to 4C illustrate an example of assisting a text modification by using speech recognition in accordance with the first embodiment of the present invention.
Under the assumption that the user wants to enter a text of "Damian took a new Nike sneaker", the text writing mode is first executed through a user control. Figure 4A illustrates an image on the terminal including the text display window (i) and the voice assistance window (j). The text display window (i) displays the entered text and the result of executing the text writing operation. Also, the voice assistance window (j) displays the text converted from the input voice, and displays the status of the operation.
After the text writing mode is executed, the text is entered through the physical or virtual keyboard 166. Simultaneously with or after the text entry, a voice having the same content as the text is entered through the microphone 162 by the user reading the text aloud. In figure 4B, the entered text is displayed in the text display window (i), and the input voice is converted into text and displayed in the voice assistance window (j). The text displayed in the voice assistance window (j) corresponds to an image generated by recognizing the input voice through speech recognition, converting the voice into text, and then displaying the text in the voice assistance window (j). Text entered while the voice is input by the user can be preset to be distinguished by underlining or highlighting.
After that, when the user wants to change the contents of the text display window (i) to the contents of the voice assistance window (j), the user can change the text displayed in the text display window (i) to the converted voice text displayed in the voice assistance window (j) by applying the pre-set function through a user selection. The pre-set function can be a pre-set item that allows the user to select an application which changes the content of the text display window (i) to the content of the voice assistance window (j), the pre-set function can be set to be displayed on the touch screen 190 as a pop-up window, or the pre-set function can be a pre-set voice command. Figure 4C illustrates an image generated by changing the content of the text display window (i) to the content of the voice assistance window (j).
Figures 5A to 5C illustrate an example of assisting a pre-regulated operation by using speech recognition, according to a second embodiment of the present invention. An operation of assisting the pre-set operation during a text entry by using speech recognition will be described with reference to figures 5A to 5C.
Assuming that the user wants to identify his or her current position while entering text, the user first selects and executes the text writing mode through a user control. Figure 5A illustrates an image on the terminal, including the text display window (i) and the voice assistance window (j), before the voice input.
After the text writing mode is executed, text is entered into the terminal via the physical or virtual keyboard 166. Simultaneously with the text input, the user enters a voice command of "Attach my current location", which is a preset operation command, through the microphone 162. As shown in figure 5B, an image is displayed in which the entered text "Damian picked up a new Nike shoe" appears in the text display window (i) and the text "Attaching your current location", expressing the execution of an operation command by recognizing the entered voice, appears in the voice assistance window (j).
At this point, the user can continue to enter text. When the execution of the preset operation command entered through the voice is completed, the user's position is extracted through a position-related application stored in the terminal, or a navigation application showing the user's position is executed as a result of the operation. The user's position can be determined by the GPS module 155 in figure 1. Figure 5C illustrates an example of a displayed result image generated by running a map showing the user's position, which is an output result displayed in the text display window (i), after the preset operation command is completely executed.
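A preset operation command such as "Attach my current location" can be handled with a simple dispatch table. The handler below returns a fixed stub rather than querying the GPS module 155, and all names are assumptions for illustration.

```python
def attach_current_location():
    # A real terminal would query the GPS module 155; a fixed stub is used here.
    return "Attaching your current location: lat=37.25, lon=127.05"

# Hypothetical table mapping recognized voice strings to preset operations.
PRESET_COMMANDS = {
    "attach my current location": attach_current_location,
}

def handle_voice(recognized):
    handler = PRESET_COMMANDS.get(recognized.strip().lower())
    if handler is None:
        return None  # not a preset operation command
    return handler()

print(handle_voice("Attach my current location"))
```

A voice that matches no table entry falls through, so ordinary dictation is unaffected.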
Figures 6A to 6C illustrate an example of assisting a preset operation by using voice recognition in accordance with a third embodiment of the present invention. An operation of assisting the preset operation by using voice recognition while text is entered will be described in detail with reference to figures 6A to 6C.
Figure 6A illustrates an example of a displayed result image generated by running a map showing the user's position, which is an output result, as shown in figure 5C. First, the user enters a voice command via the microphone 162 corresponding to a preset operation command of "Move a cursor behind 'store'", as shown in figure 6A. The command is for moving the cursor in the text to a position following the word "store" in the text shown in the text display window (i).
After that, the entered voice is recognized using voice recognition, and the operation command is carried out according to the recognized voice of "Move a cursor behind 'store'". Figure 6B illustrates an image generated by moving the position of the cursor according to the operation command of the recognized voice "Move a cursor behind 'store'".
Then, the user enters an "Enter 'right now'" voice command corresponding to a preset operation command, the entered voice is recognized using voice recognition, and the operation command is performed according to the recognized voice. Figure 6C illustrates an image showing a result generated by entering the text "right now" at the position where the cursor is located, according to the recognized operation command of "Enter 'right now'".
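The two cursor commands of figures 6A to 6C can be modeled over a plain text buffer. The command grammar below is a guess at the illustrated phrasing, not the patent's actual parser.

```python
import re

def execute_cursor_command(text, cursor, command):
    """Apply the two illustrated commands to a (text, cursor) pair."""
    m = re.fullmatch(r"Move a cursor behind '(.+)'", command)
    if m:  # place the cursor just after the named word
        idx = text.find(m.group(1))
        if idx != -1:
            cursor = idx + len(m.group(1))
        return text, cursor
    m = re.fullmatch(r"Enter '(.+)'", command)
    if m:  # insert the quoted string at the cursor position
        inserted = " " + m.group(1)
        return text[:cursor] + inserted + text[cursor:], cursor + len(inserted)
    return text, cursor


text, cur = "I went to the store to buy shoes.", 0
text, cur = execute_cursor_command(text, cur, "Move a cursor behind 'store'")
text, cur = execute_cursor_command(text, cur, "Enter 'right now'")
print(text)  # → I went to the store right now to buy shoes.
```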
In addition, the preset operation command in the text writing mode can be set to operate preset functions of the terminal by the entered voice, such as playing music or a video, searching the Internet, and executing a particular application. In addition, a result of the operation can be displayed directly in the text window or can appear as a pop-up window.
Figure 7 is a flow chart illustrating a process for assisting a preset operation and a text modification by using voice recognition according to the second embodiment of the present invention. An operation of entering a voice and a text simultaneously in the text writing mode, performing a preset operation by using the entered voice, and assisting in modifying the entered text by using the entered voice will be described.
With reference to figure 7, steps 400 to 406 are steps for performing a preset operation according to a key input or touch input in the preset operation mode of the first process of figure 2, which describe an operation of executing the text writing mode and displaying the entered text in a preset text display window.
First, the text writing mode is executed via a user control input in step 400, and the voice recognition function of the voice recognition module 111 is activated in step 402. At this point, the voice recognition of the voice recognition module 111 can be activated automatically, simultaneously with the execution of the text writing mode, or it can be activated by a user selection. After that, it is determined whether text is entered via the physical or virtual keyboard 166, according to a key input or a touch input, in step 404. When text is entered, the entered text is displayed in a preset text display window (i) in step 406. If not, a voice input is determined in step 408.
The remaining steps are processes corresponding to the second process of figure 2, for recognizing the voice entered during the execution of the preset operation of the preset operation mode, as described in figure 2, to assist in the execution of the operation of the first process according to the recognized voice.
It is determined whether a voice is entered via the microphone 162 in step 408. The voice input at this point can be made in all cases, such as where text is not being entered, text is being entered, or text has been entered and then displayed in the text writing mode. From this point on, all of these cases will be described with reference to the case in which text is being entered. When a voice is entered by the user while text is being entered in the text writing mode, the activated voice recognition module 111 recognizes the voice entered in step 410. If there is no voice input, it is determined whether the user wishes to end the text writing mode in step 418.
Steps 412 to 416 describe the execution of the preset operation command. It is determined whether the recognized voice is the preset operation command in step 412. When the recognized voice is the preset operation command, the preset operation command is executed by applying the recognized voice in step 414. When the execution of the preset operation command is completed, a result of the operation execution is extracted in step 416.
At this time, the execution of the operation command does not interrupt the text input and the display of the entered text. That is, the text input can be performed simultaneously with the voice input and the recognition of the entered voice. In addition, text can still be entered while the preset operation command of the entered and recognized voice is executed.
The user selects whether to end the text writing mode in step 418. When the user does not wish to end the text writing mode, the process returns to step 404. When the user selects to end the text writing mode, the text writing mode ends.
Steps 420 to 429 describe the text modification assistance operation. That is, if it is determined in step 412 that the recognized voice is not the preset operation command, the possibility of an error in the entered text is analyzed by comparing the displayed text with the recognized voice in step 420. When the displayed text is not identical to the recognized voice, it is determined that the displayed text has an error, and thus the recognized voice is converted into text and a modification of the displayed text is assisted.
After that, it is determined whether the entered text has an error in step 422. That is, when the displayed text is not identical to the recognized voice, as found by comparing the displayed text with the recognized voice, it is determined that the displayed text has an error. When the displayed text has an error, the recognized voice is converted into text in step 424, and the voice-converted text is displayed in the preset voice assistance window (j) in step 426. After this, the user identifies the contents of the text display window (i) and the contents of the voice assistance window (j) and determines whether to change the contents of the text display window (i) to the contents of the voice assistance window (j) in step 428. When the user wants to change the contents of the text display window (i) to the contents of the voice assistance window (j), the user changes the text displayed in the text display window (i) to the voice-converted text displayed in the voice assistance window (j) by applying a preset function for changing the contents of the text display window (i) to the contents of the voice assistance window (j) via a user control input in step 429, which can be done through any of the buttons 161, the keyboard 166, or a voice command entered via the microphone 162. The preset function for changing the contents of the text display window (i) to the contents of the voice assistance window (j) can be set as an item in the text writing mode, a pop-up window on the touch screen 190, or an input of a preset voice command. In addition, a meaningless recognized voice, which has no similarity to the displayed text or to the preset operation command, is not applied when voice recognition is performed.
After step 429, the user selects whether to end the text writing mode in step 418. When the user does not want to end the text writing mode, the process returns to step 404. When the user selects to end the text writing mode, the text writing mode ends.
In addition, if the text has no error in step 422, the user will select whether to end the text writing mode in step 418. Also, if the user decides in step 428 not to change the contents of the text display window (i) to the contents of the voice assistance window (j), the user will select whether to end the text writing mode in step 418.
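The branch in figures 7 and 8 — no error when identical, correction when similar but different, ignored when meaningless — hinges on a text/voice comparison the patent leaves unspecified. A sketch using `difflib` similarity with an assumed threshold:

```python
from difflib import SequenceMatcher

def classify_recognized_voice(displayed_text, recognized_voice, threshold=0.6):
    """Return 'no_error', 'correct' (offer window (j) text), or 'ignore'.

    The threshold and use of difflib are assumptions; the patent only states
    that non-identical text is an error and a dissimilar voice is not applied.
    """
    if displayed_text == recognized_voice:
        return "no_error"
    similarity = SequenceMatcher(None, displayed_text, recognized_voice).ratio()
    return "correct" if similarity >= threshold else "ignore"


print(classify_recognized_voice("Damian piked up a new Nke shoe",
                                "Damian picked up a new Nike shoe"))  # → correct
print(classify_recognized_voice("same text", "same text"))            # → no_error
```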
Figure 8 is a flow chart illustrating a process for assisting a text modification by using voice recognition in accordance with a third embodiment of the present invention. An operation of simultaneously entering a text and a voice in the text writing mode and assisting in modifying an error in the entered text by using the entered voice will be described.
The operation of figure 8 is identical to the operation of assisting in modifying the entered text by using the entered voice in the text writing mode of figure 3. Therefore, the operation of assisting in the text modification by using voice recognition will be described only briefly.
First, the text writing mode is executed through a user control input in step 500, and the voice recognition function of the voice recognition module 111 is activated in step 502. After that, it is determined whether text is entered via the physical or virtual keyboard 166, according to a key input or touch input, in step 504. When text is entered, the entered text is displayed in a preset text display window (i) in step 506. If not, a voice input is determined in step 508.
After that, it is determined whether a voice is entered through the microphone 162 in step 508. When a voice is entered while text is being entered in the text writing mode, the activated voice recognition module 111 recognizes the voice entered in step 510. If there is no voice input, it is determined whether the user wishes to end the text writing mode in step 520.
Then, the possibility of an error in the entered text is analyzed by comparing the displayed text with the recognized voice in step 511, and it is determined whether the entered text has an error in step 512. That is, when the displayed text is not identical to the recognized voice, it is determined that the displayed text has an error. When the displayed text is identical to the recognized voice, it is determined that the displayed text has no error, and it is then determined whether the user wishes to end the text writing mode in step 520.
As a result of determining whether the entered text has an error in step 512, when the displayed text has an error, the recognized voice is converted into text in step 514, and the voice-converted text is displayed in a preset voice assistance window (j) in step 515. The user identifies the contents of the text display window (i) and the contents of the voice assistance window (j) and determines whether to change the contents of the text display window (i) to the contents of the voice assistance window (j) in step 517. When the user wishes to change the contents of the text display window (i) to the contents of the voice assistance window (j), the user changes the text displayed in the text display window (i) to the voice-converted text displayed in the voice assistance window (j) by applying a preset function for changing the contents of the text display window (i) to the contents of the voice assistance window (j) through a user control input in step 519, as described above.
After that, the user selects whether to end the text writing mode in step 520. When the user does not want to end the text writing mode, the text writing mode does not end and the process returns to step 504. When the user selects to end the text writing mode, the text writing mode ends.
The method of assisting in the execution of an operation by using voice recognition according to the present invention during the execution of a list menu, which is one of the preset operation modes, will now be described.
Figure 9 is a flow chart illustrating a process for assisting an operation in a list menu input by using voice recognition according to a fourth embodiment of the present invention. An operation of simultaneously entering a preset operation command and a voice on a screen displaying lists, and assisting in executing the preset operation command by using the entered voice, will be described.
Referring to figure 9, steps 600 to 603 are steps for performing the preset operation of the operation mode according to a key input or a touch input in the preset operation mode in the first process of figure 2.
First, the list menu is executed via a user control input in step 600, and lists of the executed list menu are displayed in step 601. At this point, the voice recognition function of the voice recognition module 111 can be activated automatically or can be activated by a user selection using the buttons 161, the keyboard 166 or the microphone 162. After that, it is determined whether there is a touch input on the touch screen 190 in step 602. If not, it is determined whether a voice is entered in step 604. When there is a touch input on the touch screen 190, a touch input operation is performed in step 603. The touch input at this point is a scroll touch input, which corresponds to an input of a flick operation in which the user contacts a particular area of the displayed lists by using a body part or a touchable input device and then removes the body part or touchable input device from the particular area in a direction of movement. At this time, the displayed lists can be scrolled in an up, down, left or right direction according to the scroll direction.
The remaining steps correspond to the second process of figure 2, of recognizing the voice entered during the execution of the preset operation of the operation mode of the first process of figure 2, in which an operation of assisting a touch input operation in a list menu is described.
In step 604, it is determined whether a voice is entered through the microphone 162. If not, it is determined whether to end the list menu mode in step 609. The voice input at this time can be made in all cases, such as where a touch is not entered, a touch is being entered, or a preset operation command is being performed during the touch input, while the lists of the list menu are displayed. From this point on, all of these cases will be described with reference to the case in which the touch operation is being performed. When a voice is entered by the user while a scroll operation of the touch input is performed in the state in which the lists are displayed, the activated voice recognition module 111 recognizes the voice entered in step 605. It is determined whether the recognized voice is the preset operation command in step 606. If not, it is determined whether to end the list menu mode in step 609. A recognized voice which has no similarity to the preset operation command is not applied. When the recognized voice is the preset operation command, the preset operation command of the recognized voice is executed during the execution of the touch operation in step 607, and a result of the execution of the operation command is extracted in step 608. The preset operation command can be a command set to automatically perform the scroll operation of the displayed list to a desired position in a preset up, down, left or right direction. In addition, the command set to automatically perform the scroll operation of the list to the desired position can include a command set to automatically perform a scroll operation to a position of one or more words, a character string, or a phrase, a position of a part of all the lists, and a position of a language for each country.
After that, it is determined whether the list menu is to be ended through a user selection in step 609. When the user wants to perform the operation continuously, the list menu does not end and the process returns to step 602. When the list menu is ended via a user selection, the screen displaying the list menu ends.
Figures 10 to 12 are examples of assisting an input operation in the list menu by using voice recognition, according to the fourth, fifth and sixth embodiments of the present invention, respectively. The execution of the input operation in the list menu using voice recognition will be described in detail with reference to figures 10 to 12.
First, the list menu is executed via user selection, and the lists from the executed list menu are displayed. Figures 10A, 11A and 12A are images of screens on which the list menu is executed and then the lists are displayed.
When the lists are displayed, a scroll operation is performed in the direction shown by the arrow in figure 10A, figure 11A and figure 12A through a user flick input. Figure 10B, figure 11B and figure 12B are screen images displaying particular images while the displayed lists are scrolled.
A preset command is performed by the input of a voice of the preset command during the scroll operation. Figure 10C illustrates an image of a screen in which a scroll operation is performed on the list to the part where the items in the list beginning with the letter J begin, when a voice of the preset command "Until J" is entered. In addition, figure 11C illustrates an image of a screen in which a scroll operation is performed to the part where the center of the entire list begins, when a voice of the preset command "Up to half of the list" is entered. In addition, figure 12C is an image of a screen on which a scroll operation is performed to the part of the entire list where the items in the Korean language begin, when a voice of the preset command "Until Korean" is entered.
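Resolving commands like "Until J" or "Up to half of the list" to a scroll target can be sketched as an index lookup over the displayed list; the grammar and function name are illustrative assumptions.

```python
def resolve_scroll_target(items, command):
    """Map a preset scroll command to an index in the displayed list."""
    if command.startswith("Until "):
        prefix = command[len("Until "):].strip()
        for i, item in enumerate(items):
            if item.upper().startswith(prefix.upper()):
                return i  # first item beginning with the spoken letter/word
        return len(items) - 1  # nothing matches: scroll to the end
    if command == "Up to half of the list":
        return len(items) // 2  # middle of the entire list
    return 0


contacts = ["Alice", "Bob", "Carol", "Hana", "Ian", "Jack", "Jane", "Kyle"]
print(resolve_scroll_target(contacts, "Until J"))                 # → 5
print(resolve_scroll_target(contacts, "Up to half of the list"))  # → 4
```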
The method of assisting in performing an operation by using voice recognition in accordance with the present invention when a home screen, which is one of the preset operation modes, is edited will now be described.
Figure 13 is a flow chart of assisting in a home screen edit by using voice recognition in accordance with a fifth embodiment of the present invention. An operation of simultaneously entering a preset operation command and a voice on an executed home screen, and assisting in the execution of the preset operation command by using the entered voice, is described. With reference to figure 13, steps 700 to 703 are steps for performing the preset operation of the operation mode according to a key input or a touch input in the preset operation mode in the first process of figure 2.
First, the home screen is executed through a user control input in step 700, and a home screen page is displayed in step 701. The home screen is preset to include preset home screen pages including preset items, where one or more home screen pages can be preset. Also, the voice recognition function of the voice recognition module 111 can be activated automatically or can be activated by a user selection using the buttons 161, the keyboard 166 or the microphone 162. After that, it is determined whether there is a touch input on the touch screen 190 in step 702. When there is a touch input on the touch screen 190, a touch input operation is performed in step 703. If not, the process ends.
The touch input at this point is a drag touch input, which corresponds to an input of a drag operation in which the user contacts one or more particular items on a displayed home screen page by using a body part or a touchable input device and then moves the body part or touchable input device from the contacted particular items in a direction of movement. At this point, items can be moved from one page of the home screen to another page of the home screen by dragging in an up, down, left or right direction according to the drag direction.
The remaining steps correspond to the second process of figure 2, of recognizing the voice entered during the execution of the preset operation of the operation mode of the first process of figure 2.
It is determined whether a voice is entered via the microphone 162 in step 704. The voice input via the microphone 162 can be made in all cases: when a touch is not entered, when a touch is being entered, and when a preset operation command is being executed during the touch input, while the home screen is displayed. From this point on, all of these cases will be described with reference to the case in which the touch operation is being performed. When a voice is entered by the user while the touch operation is performed, while the home screen is displayed, the activated voice recognition module 111 recognizes the voice entered in step 704. If there is no voice input, the process ends.
After that, it is determined whether the recognized voice is the preset operation command in step 706. At this time, a recognized voice which is not similar to the preset operation command is not applied. If the recognized voice is not the preset operation command, the process ends.
A determination as to whether the recognized voice is the preset operation command in step 706 will be described in detail. First, when a preset item is dragged to another page of the preset home screen by applying a touch to it, it is determined whether there is a space in which the dragged item can be placed on that home screen page. When the item is moved through a touch input and there is a space for placing the item when the touch ends, dragging the item means placing the item. When there is no space to place the item, dragging the item means returning the item to its original location. When there is no space to place the dragged item on the home screen page, it is determined whether the voice recognized in the dragged state is the preset operation command. When the recognized voice is the preset operation command, the preset operation command of the recognized voice is executed during the execution of the touch operation in step 707, and a result of the execution of the operation command is extracted in step 708. The preset operation command can be a preset operation command which moves a preset item from one page of the preset home screen to another page. In addition, the preset operation command can be a preset command which generates a new home screen page. As a result of executing the preset operation command of the recognized voice, when there is space for placing the item, the applied touch ends through an operation of removing the user's body part or the touchable input device from the item, and the item is placed on the home screen page where the touch ends.
After that, based on whether a touch input by the user is performed, the operations from step 702 can be performed again when the user wishes to continuously perform operations, and the home screen editing ends when the user does not perform a touch input in step 709.
Figures 14A to 14E illustrate an example of assistance in editing the home screen by using speech recognition in accordance with the seventh embodiment of the present invention.
First, when the home screen is executed by a user control input, the executed home screen is displayed. Figure 14A illustrates an image of a screen on which the home screen is executed and then displayed, and figure 14B illustrates an image of an editing mode screen of the home screen, when a touch is entered and the touch state is held. It is assumed that the home screen of figure 14A and figure 14B is page 1.
When the home screen is displayed, an item on the home screen is dragged to another page of the home screen via a touch input by the user. Figure 14C illustrates an image in which the item on the home screen is dragged to another page of the home screen via the touch input by the user. It is assumed that the home screen page of figure 14C is page 3 and there is no space to place a new item on the page 3 screen. As shown in figure 14C, when there is no space to place the new item on the home screen page to which the item is dragged, a message informing the user that there is no space is displayed on the screen, or the user is informed of the fact that there is no space through a voice, through which the user can identify that there is no space. The user enters the preset operation command via the voice in the drag operation state, and the terminal executes the preset operation command by recognizing the entered voice. For example, the user can move all items on page 3 to another page of the home screen with a voice command of "Move all items to the next page". Figure 14D illustrates an image in a touch-and-drag state where the preset operation command is executed, and all items on page 3 are moved to another page of the home screen. Figure 14E illustrates an image in which the touch ends, an icon is placed in the position where the touch ends, and the editing mode ends.
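The page-full scenario of figures 14C and 14D can be sketched with a small page model. The per-page capacity and the method names are assumptions for illustration.

```python
class HomeScreen:
    """Sketch of home screen pages during a drag edit (capacity assumed)."""

    PAGE_CAPACITY = 4  # assumed number of item slots per page

    def __init__(self, pages):
        self.pages = pages  # list of pages, each a list of item names

    def has_space(self, page_no):
        return len(self.pages[page_no]) < self.PAGE_CAPACITY

    def move_all_items_to_next_page(self, page_no):
        # Voice command "Move all items to the next page"; a new page is
        # generated when the dragged-to page is the last one.
        if page_no + 1 == len(self.pages):
            self.pages.append([])
        self.pages[page_no + 1] = self.pages[page_no] + self.pages[page_no + 1]
        self.pages[page_no] = []

    def end_touch(self, page_no, item):
        # The touch ends: place the item if space exists, else return it.
        if self.has_space(page_no):
            self.pages[page_no].append(item)
            return True
        return False


screen = HomeScreen([["Phone", "Mail"], [], ["A", "B", "C", "D"]])
if not screen.has_space(2):                  # page 3 (index 2) is full
    screen.move_all_items_to_next_page(2)    # preset voice command frees it
print(screen.end_touch(2, "Camera"))  # → True
print(screen.pages[2])                # → ['Camera']
```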
In addition, the preset operation command on the home screen can be set to perform functions such as deleting or copying an item, or to move only some items selected by the user when the user wants to move items.
It can be appreciated that the embodiments of the present invention can be implemented in software, hardware, or a combination thereof. Any such software can be stored, as described above, for example, in a volatile or non-volatile storage device such as a ROM, a memory such as a RAM, a memory chip, a memory device or a memory IC, or a recordable optical or magnetic medium such as a CD, a DVD, a magnetic disk or a magnetic tape, regardless of its ability to be erased or its ability to be rewritten. It can also be appreciated that the memory included in the mobile terminal is an example of machine-readable devices suitable for storing a program including instructions that are executed by a processor device in order to implement the embodiments of the present invention. Therefore, the embodiments of the present invention provide a program including codes for implementing a system or method claimed in any claim of the appended claims, and a machine-readable device for storing such a program. In addition, this program can be electronically conveyed by any medium, such as a communication signal transferred via a wired or wireless connection, and the embodiments of the present invention suitably include equivalents thereof.
Although the present invention has been shown and described with reference to certain embodiments thereof, it will be understood by those skilled in the art that various changes in form and details can be made therein without departing from the spirit and scope of the present invention, as defined by the appended claims.
Claims (9)
1. Method of performing a preset operation by using voice recognition, the method characterized by the fact that it comprises:
performing the preset operation in a preset operation mode according to a key input or a touch input in the preset operation mode; and
recognizing a voice entered during the execution of the preset operation of the preset operation mode and assisting in the execution of the preset operation according to the recognized voice.
2. Method according to claim 1, characterized by the fact that the preset operation mode corresponds to a text writing mode, and the execution of the preset operation comprises the display of an entered text according to the key input or the touch input in the text writing mode in a text display window, and the assistance of the preset operation comprises recognizing the entered voice while displaying the entered text according to the key input or the touch input in the text display window, comparing the displayed text with the recognized voice, determining that the displayed text has an error when the displayed text is not identical to the recognized voice, determining that the displayed text has no error when the displayed text is identical to the recognized voice, and converting the recognized voice into text and modifying the displayed text when it is determined that the displayed text has the error, and, when it is determined that the displayed text has no error, determining whether the recognized voice is a preset operation command, and executing the preset operation by applying the recognized voice when the recognized voice is the preset operation command.

3. Method according to claim 1, characterized by the fact that the preset operation mode is a text writing mode, and the execution of the preset operation comprises the display of an entered text according to a key input or a touch input in the text writing mode in a text display window, and the assistance of the preset operation comprises recognizing the entered voice while displaying the entered text according to the key input or the touch input in the text display window, determining whether the recognized voice is a preset operation command, executing the preset operation by applying the recognized voice when the recognized voice is the preset operation command, comparing the displayed text with the recognized voice when the recognized voice is not the preset operation command, determining that the displayed text has an error when the displayed text is not identical to the recognized voice, determining that the displayed text has no error when the displayed text is identical to the recognized voice, and converting the recognized voice into text and modifying the displayed text when it is determined that the displayed text has the error.
4. Method according to claim 2 or 3, characterized by the fact that the modification of the displayed text comprises:
converting the recognized voice to text and displaying the converted voice text in a voice assistance window; and changing the text displayed in the text display window to the displayed converted voice text when a preset function for changing the content of the text display window to the content of the voice assistance window is applied.
5. Method according to claim 2 or 3, characterized by the fact that the execution of the preset operation comprises: executing the preset operation when the recognized voice is the preset operation command; and extracting an execution result when the execution of the preset operation is completed, wherein the display of the input text according to the key input or touch input in the text display window is not interrupted during the execution of the preset operation.
6. Method according to claim 1, characterized by the fact that the preset operation mode is a state in which lists of a preset list menu are displayed, the execution of the preset operation comprises executing a scrolling operation on the displayed lists according to the touch input, and assisting the preset operation comprises recognizing the voice input during the execution of the scrolling operation on the displayed lists, and executing the preset operation by applying the recognized voice when the recognized voice is a preset operation command.
7. Method according to claim 6, characterized by the fact that the touch input is a scroll touch input, the preset operation command is a command set to automatically perform the scrolling operation on the displayed list to a desired position, and the command set to automatically perform the scrolling operation on the displayed list to the desired position includes a command set to automatically perform a scrolling operation to a position of one or more words, a characteristic string or phrase, a position of a part or of all of the lists, and a language position for each country.
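The voice-driven auto-scroll of claims 6 and 7 amounts to resolving a spoken phrase to a list position. The sketch below is illustrative only: the positional vocabulary ("top", "middle", "bottom") and substring matching are assumptions standing in for the claim's word, phrase, and part-of-list targets.

```python
def scroll_target(items, spoken):
    """Claim-7 style auto-scroll: map a recognized spoken phrase to an
    index in the displayed list. Positional words jump to a fraction of
    the list; otherwise the first entry containing the phrase wins.
    Returns None when nothing matches (no scroll is performed)."""
    fractions = {"top": 0.0, "middle": 0.5, "bottom": 1.0}  # assumed vocabulary
    if spoken in fractions:
        return round(fractions[spoken] * (len(items) - 1))
    for i, entry in enumerate(items):
        if spoken.lower() in entry.lower():
            return i
    return None
```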
8. Method according to claim 1, characterized by the fact that the preset operation mode is a state in which one of the pages of a preset home screen including preset items is displayed, the execution of the preset operation comprises executing an operation of selecting one or more items on a displayed page of the home screen according to the touch input and then moving the selected items to another page of the home screen, and assisting the preset operation comprises recognizing the voice input during the execution of the operation of moving the selected items to another page of the home screen, and executing the preset operation according to a preset operation command by applying the recognized voice when the recognized voice is the preset operation command.
9. Method according to claim 8, characterized by the fact that the touch input is a drag touch input, the drag operation includes an operation of placing the moved items at a position at which the touch ends when there is space to place the moved items, and of returning the items to an original position when there is no space;
the preset operation command includes a command set to automatically move one or more items of a page of the home screen to another page of the home screen; determining whether the recognized voice is the preset operation command comprises determining, when one or more items of a page of the home screen are dragged to another page of the home screen by a touch applied to the items, whether there is space to place the dragged items on the other page of the home screen, and determining whether the recognized voice is the preset operation command while the items are dragged by the touch applied to the items when there is no space to place the dragged items; and executing the preset operation according to the preset operation command of the recognized voice comprises executing the preset operation command when the recognized voice is the preset operation command, and ending the touch applied to the dragged items and placing the dragged items on a page of the home screen where the touch ends when there is space to place the dragged items, wherein the preset operation command includes a command set to generate a new page of the home screen.
10. Apparatus for performing a preset operation using voice recognition, the apparatus characterized by the fact that it comprises:
an input/output module including at least one button and a physical or virtual keyboard for receiving a control input from a user, and a microphone for receiving a voice input from the user;
a touch screen which receives a control input from the user and displays an execution image, an operating state and a menu state of an application program; and
a controller which controls the input/output module and the touch screen, includes a voice recognition module for recognizing the voice input through the input/output module, performs the preset operation according to a key input or a touch input from the touch screen, and applies a recognized user voice received from the voice recognition module to assist the execution of the preset operation.
11. Apparatus according to claim 10, characterized by the fact that, in order to apply the recognized voice to assist the execution of the preset operation, the controller displays a text received from the input/output module in a text display window of the touch screen, recognizes the voice received from the input/output module, compares the displayed text with the recognized voice, determines that the displayed text has an error when the displayed text is not identical to the recognized voice, and converts the recognized voice to text and modifies the displayed text when the displayed text has the error.
12. Apparatus according to claim 10, characterized by the fact that, in order to apply the recognized voice to assist the execution of the preset operation, the controller displays a text received from the input/output module, determines whether the recognized voice is a preset operation command, performs the preset operation when the recognized voice is the preset operation command, compares the displayed text with the recognized voice when the recognized voice is not the preset operation command, determines that the displayed text has an error when the displayed text is not identical to the recognized voice, and converts the recognized voice to text and modifies the displayed text when the displayed text has the error.
13. Apparatus according to claim 11 or 12, characterized by the fact that, in order to modify the displayed text, the controller converts the recognized voice to text, displays the converted voice text in a voice assistance window of the touch screen, and changes the text displayed in the text display window to the displayed converted voice text when the user selects to change the content of the text display window to the content of the voice assistance window.
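The two-window arrangement of claims 4 and 13 keeps the voice-converted text in a separate preview window until the user confirms the swap. A minimal sketch of that state machine, with class and method names assumed for illustration:

```python
class VoiceAssist:
    """Sketch of the claim-13 window swap: converted voice text is
    shown in a voice assistance window and copied into the text
    display window only when the user applies the change function."""

    def __init__(self):
        self.text_window = ""    # text display window content
        self.assist_window = ""  # voice assistance window content

    def show_recognized(self, voice_text):
        # Preview only: the displayed text is left untouched.
        self.assist_window = voice_text

    def accept(self):
        # User-confirmed: replace the displayed text with the preview.
        self.text_window = self.assist_window
```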
14. Apparatus according to claim 10, characterized by the fact that the preset operation is a scrolling operation in a list menu mode, and, in order to execute the preset operation according to a preset operation command by applying the recognized voice, the controller recognizes a voice by applying the voice recognition module when the voice is input while the scrolling operation is performed according to a touch input for performing the scrolling operation on a preset list screen of the list menu mode on the touch screen, and performs the preset operation according to the preset operation command while the scrolling operation is performed, when the recognized voice is the preset operation command.
15. Apparatus according to claim 10, characterized by the fact that the preset operation is a home screen editing operation, and, in order to execute the preset operation according to a preset operation command by applying the recognized voice, the controller determines, when one or more preset items are dragged from one page of a preset home screen of the touch screen to another page of the home screen, whether there is space to place the items on the page of the home screen to which the items are dragged, recognizes a voice by applying the voice recognition module while the items are dragged by the touch, performs the preset operation according to the preset operation command, places the items at a position on the page of the home screen where the touch ends, when there is space to place the items and the touch applied to the items is ended, and returns the items to an original position when there is no space to place the items and the touch applied to the items is ended.
Similar technologies:
Publication number | Publication date | Patent title
BR102013025963A2|2018-06-26|METHOD AND APPARATUS FOR IMPLEMENTATION OF A PRE-REGULATION MODE OF OPERATION USING VOICE RECOGNITION
US20190189125A1|2019-06-20|Contextual voice commands
KR102022318B1|2019-09-18|Method and apparatus for performing user function by voice recognition
KR101929301B1|2019-03-12|Method and apparatus for control actuating function through recognizing user's writing gesture in portable terminal
AU2014275609B2|2019-06-27|Portable terminal and user interface method in portable terminal
BR102014002492A2|2015-12-15|method and apparatus for multitasking
US20130257780A1|2013-10-03|Voice-Enabled Touchscreen User Interface
WO2011087880A1|2011-07-21|Automatic keyboard layout determination
KR20120020853A|2012-03-08|Mobile terminal and method for controlling thereof
KR20140134018A|2014-11-21|Apparatus, method and computer readable recording medium for fulfilling functions rerated to the user input on the screen
JP2015004977A|2015-01-08|Electronic device and method for conversion between audio and text
JP2013222458A|2013-10-28|Electronic device and method for inputting and managing user data
EP3822829A1|2021-05-19|Mail translation method, and electronic device
EP2806364B1|2021-10-27|Method and apparatus for managing audio data in electronic device
US20210096728A1|2021-04-01|Control Method and Electronic Device
AU2014221287A1|2014-09-25|Contextual voice commands
KR20140030398A|2014-03-12|Operating method for command pad and electronic device supporting the same
JP2017157057A|2017-09-07|Display control device
AU2018250484A1|2018-11-15|Contextual voice commands
Patent family:
Publication number | Publication date
RU2013144921A|2015-04-20|
US20190147879A1|2019-05-16|
AU2013237690B2|2018-07-05|
US20140100850A1|2014-04-10|
CN103716454A|2014-04-09|
KR102009423B1|2019-08-09|
US10825456B2|2020-11-03|
AU2013237690A1|2014-04-24|
KR20140045181A|2014-04-16|
JP6347587B2|2018-06-27|
EP2717259A3|2014-04-30|
EP2717259A2|2014-04-09|
JP2014078007A|2014-05-01|
EP2717259B1|2016-09-14|
Cited documents:
Publication number | Filing date | Publication date | Applicant | Patent title

US6545669B1|1999-03-26|2003-04-08|Husam Kinawi|Object-drag continuity between discontinuous touch-screens|
CN1867068A|1998-07-14|2006-11-22|联合视频制品公司|Client-server based interactive television program guide system with remote server recording|
AT297046T|1999-07-08|2005-06-15|Koninkl Philips Electronics Nv|ADJUSTING A LANGUAGE IDENTIFIER TO CORRECTED TEXTS|
US7467089B2|2001-09-05|2008-12-16|Roth Daniel L|Combined speech and handwriting recognition|
US20030061053A1|2001-09-27|2003-03-27|Payne Michael J.|Method and apparatus for processing inputs into a computing device|
US6791529B2|2001-12-13|2004-09-14|Koninklijke Philips Electronics N.V.|UI with graphics-assisted voice control system|
JP2003195939A|2001-12-26|2003-07-11|Toshiba Corp|Plant monitoring controlling system|
US6895257B2|2002-02-18|2005-05-17|Matsushita Electric Industrial Co., Ltd.|Personalized agent for portable devices and cellular phone|
US7380203B2|2002-05-14|2008-05-27|Microsoft Corporation|Natural input recognition tool|
AU2002336458A1|2002-09-06|2004-03-29|Jordan R. Cohen|Methods, systems, and programming for performing speech recognition|
GB2433002A|2003-09-25|2007-06-06|Canon Europa Nv|Processing of Text Data involving an Ambiguous Keyboard and Method thereof.|
US7574356B2|2004-07-19|2009-08-11|At&T Intellectual Property Ii, L.P.|System and method for spelling recognition using speech and non-speech input|
US8942985B2|2004-11-16|2015-01-27|Microsoft Corporation|Centralized method and system for clarifying voice commands|
EA200800069A1|2005-06-16|2008-06-30|Фируз Гассабиан|DATA INPUT SYSTEM|
US7941316B2|2005-10-28|2011-05-10|Microsoft Corporation|Combined speech and alternate input modality to a mobile device|
WO2008064137A2|2006-11-17|2008-05-29|Rao Ashwin P|Predictive speech-to-text input|
US8219406B2|2007-03-15|2012-07-10|Microsoft Corporation|Speech-centric multimodal user interface design in mobile technology|
KR20090020265A|2007-08-23|2009-02-26|삼성전자주식회사|Mobile terminal and method for inputting message thereof|
US7877700B2|2007-11-20|2011-01-25|International Business Machines Corporation|Adding accessibility to drag-and-drop web content|
US8077975B2|2008-02-26|2011-12-13|Microsoft Corporation|Handwriting symbol recognition accuracy using speech input|
US8958848B2|2008-04-08|2015-02-17|Lg Electronics Inc.|Mobile terminal and menu control method thereof|
US20090326938A1|2008-05-28|2009-12-31|Nokia Corporation|Multiword text correction|
KR100988397B1|2008-06-09|2010-10-19|엘지전자 주식회사|Mobile terminal and text correcting method in the same|
US20110115702A1|2008-07-08|2011-05-19|David Seaberg|Process for Providing and Editing Instructions, Data, Data Structures, and Algorithms in a Computer System|
KR101502003B1|2008-07-08|2015-03-12|엘지전자 주식회사|Mobile terminal and method for inputting a text thereof|
KR101513635B1|2008-12-05|2015-04-22|엘지전자 주식회사|Terminal and method for controlling the same|
KR101613838B1|2009-05-19|2016-05-02|삼성전자주식회사|Home Screen Display Method And Apparatus For Portable Device|
US8661369B2|2010-06-17|2014-02-25|Lg Electronics Inc.|Mobile terminal and method of controlling the same|
US8359020B2|2010-08-06|2013-01-22|Google Inc.|Automatically monitoring for voice input based on context|
KR101718027B1|2010-09-09|2017-03-20|엘지전자 주식회사|Mobile terminal and memo management method thereof|
US8836638B2|2010-09-25|2014-09-16|Hewlett-Packard Development Company, L.P.|Silent speech based command to a computing device|
WO2013022221A2|2011-08-05|2013-02-14|Samsung Electronics Co., Ltd.|Method for controlling electronic apparatus based on voice recognition and motion recognition, and electronic apparatus applying the same|
JP6024675B2|2014-01-17|2016-11-16|株式会社デンソー|Voice recognition terminal device, voice recognition system, and voice recognition method|
US9606716B2|2014-10-24|2017-03-28|Google Inc.|Drag-and-drop on a mobile device|
US10255566B2|2011-06-03|2019-04-09|Apple Inc.|Generating and processing task items that represent tasks to perform|
CN113470640A|2013-02-07|2021-10-01|苹果公司|Voice trigger of digital assistant|
CN103198831A|2013-04-10|2013-07-10|威盛电子股份有限公司|Voice control method and mobile terminal device|
US10170123B2|2014-05-30|2019-01-01|Apple Inc.|Intelligent assistant for home automation|
US9715875B2|2014-05-30|2017-07-25|Apple Inc.|Reducing the need for manual start/end-pointing and trigger phrases|
CN104252285B|2014-06-03|2018-04-27|联想有限公司|A kind of information processing method and electronic equipment|
CN105573534A|2014-10-09|2016-05-11|中兴通讯股份有限公司|Processing method and device of operation object|
KR102302721B1|2014-11-24|2021-09-15|삼성전자주식회사|Electronic apparatus for executing plurality of applications and method for controlling thereof|
US9886953B2|2015-03-08|2018-02-06|Apple Inc.|Virtual assistant activation|
US10146355B2|2015-03-26|2018-12-04|LenovoPte. Ltd.|Human interface device input fusion|
US10726197B2|2015-03-26|2020-07-28|LenovoPte. Ltd.|Text correction using a second input|
KR101632534B1|2015-04-06|2016-06-21|주식회사 카카오|Typing error processing method and user apparatus performing the method|
US10200824B2|2015-05-27|2019-02-05|Apple Inc.|Systems and methods for proactively identifying and surfacing relevant content on a touch-sensitive device|
US20170024405A1|2015-07-24|2017-01-26|Samsung Electronics Co., Ltd.|Method for automatically generating dynamic index for content displayed on electronic device|
US10747498B2|2015-09-08|2020-08-18|Apple Inc.|Zero latency digital assistant|
US10586535B2|2016-06-10|2020-03-10|Apple Inc.|Intelligent digital assistant in a multi-tasking environment|
DK201670540A1|2016-06-11|2018-01-08|Apple Inc|Application integration with a digital assistant|
CN106940595B|2017-03-16|2019-10-11|北京云知声信息技术有限公司|A kind of information edit method and device|
JP6862952B2|2017-03-16|2021-04-21|株式会社リコー|Information processing system, information processing device, information processing program and information processing method|
CN108509175B|2018-03-30|2021-10-22|联想有限公司|Voice interaction method and electronic equipment|
US10928918B2|2018-05-07|2021-02-23|Apple Inc.|Raise to speak|
JP2019205027A|2018-05-22|2019-11-28|コニカミノルタ株式会社|Operation screen display device, image processing device, and program|
DK180639B1|2018-06-01|2021-11-04|Apple Inc|DISABILITY OF ATTENTION-ATTENTIVE VIRTUAL ASSISTANT|
DK179822B1|2018-06-01|2019-07-12|Apple Inc.|Voice interaction at a primary device to access call functionality of a companion device|
KR102136463B1|2018-07-27|2020-07-21|휴맥스|Smart projector and method for controlling thereof|
CN109361814A|2018-09-25|2019-02-19|联想有限公司|A kind of control method and electronic equipment|
KR20200099380A|2019-02-14|2020-08-24|삼성전자주식회사|Method for providing speech recognition serivce and electronic device thereof|
DK180129B1|2019-05-31|2020-06-02|Apple Inc.|User activity shortcut suggestions|
Legal status:
2018-06-26| B03A| Publication of an application: publication of a patent application or of a certificate of addition of invention|
2018-07-03| B08F| Application fees: dismissal - article 86 of industrial property law|
2018-10-30| B08K| Lapse as no evidence of payment of the annual fee has been furnished to inpi (acc. art. 87)|Free format text: IN VIEW OF THE DISMISSAL PUBLISHED IN RPI 2478 OF 03-07-2018, AND CONSIDERING THE ABSENCE OF ANY RESPONSE WITHIN THE LEGAL TIME LIMITS, THE DISMISSAL OF THE PATENT APPLICATION IS TO BE MAINTAINED, PURSUANT TO ARTICLE 12 OF RESOLUTION 113/2013.|
Priority:
Application number | Filing date | Patent title
KR1020120111402A|KR102009423B1|2012-10-08|2012-10-08|Method and apparatus for action of preset performance mode using voice recognition|
KR10-2012-0111402|2012-10-08|